
Conversation

@sosweetham (Member) commented Aug 8, 2025

Description of change

Adds Loki logging to the ecosystem.

Issue Number

n/a

Type of change

  • New (a change which implements a new feature)

How the change has been tested

manual

Change checklist

  • I have ensured that the CI Checks pass locally
  • I have removed any unnecessary logic
  • My code is well documented
  • I have signed my commits
  • My code follows the pattern of the application
  • I have self reviewed my code

Summary by CodeRabbit

  • New Features
    • Centralized logging stack via Docker Compose (Loki, Promtail, Grafana) with an auto-provisioned data source.
    • Web3 adapter logs are now shipped to Loki for visualization in Grafana; dashboard accessible without login.
  • Bug Fixes
    • Auth page now renders the QR code only when data is available, preventing blank/invalid QR displays.
  • Chores
    • Default API server port changed to 1111.
    • Minor Docker Compose formatting cleanup.

coderabbitai bot (Contributor) commented Aug 8, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds a Loki/Promtail/Grafana compose stack; minor compose formatting fix; introduces pino+pino-loki logging for web3-adapter with a shared logger and transport; logs a change event in web3-adapter; tweaks pictique-api (debug log, default port 1111); guards QR rendering in a Svelte auth page.
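For orientation, a minimal TypeScript sketch of the shared logger wiring described here; the module layout and env-variable defaults are taken from the diffs below and should be treated as assumptions rather than the final implementation:

import pino from "pino";

// pino.transport() runs the "pino-loki" target in a worker thread and
// pushes each log record to Loki with the configured labels.
const transport = pino.transport({
    target: "pino-loki",
    options: {
        host: process.env.LOKI_URL || "http://localhost:3100", // assumed env var
        labels: { app: "web3-adapter" },
    },
});

export const logger = pino(transport);

// Usage inside handleChange: structured fields plus a message string.
// logger.info({ dataId: data.id, tableName }, "Handling change");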

Changes

Cohort / File(s): summary of changes

  • Observability stack (Loki/Promtail/Grafana): control-panel.compose.yml
    New compose file adding Loki (3100), Promtail, and Grafana (3000) on a dedicated network, with an auto-provisioned Loki datasource, specific feature toggles, and anonymous access.
  • Compose maintenance: evault.docker-compose.yml
    Formatting cleanup: “networks:” key trimmed (no trailing space). No behavioral changes.
  • Web3 adapter logging: infrastructure/web3-adapter/package.json, infrastructure/web3-adapter/src/index.ts, infrastructure/web3-adapter/src/logging/*
    Adds pino and pino-loki dependencies; introduces a logging module (transport to Loki, shared logger export); logs change events in handleChange; adds a logging index re-export aggregator.
  • Pictique API tweaks: platforms/pictique-api/src/database/data-source.ts, platforms/pictique-api/src/index.ts
    Adds a startup console.log of PICTIQUE_DATABASE_URL; changes the default server port to 1111.
  • Pictique web auth UI: platforms/pictique/src/routes/(auth)/auth/+page.svelte
    Wraps the QR code render in {#if qrData} to avoid rendering with undefined data.

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Web3Adapter
  participant Logger as pino Logger
  participant Transport as pino-loki
  participant Loki
  participant Grafana as Grafana UI

  Client->>Web3Adapter: handleChange(event)
  Web3Adapter->>Logger: logger.info({dataId, tableName, ...})
  Logger->>Transport: log record
  Transport->>Loki: push stream (labels: app=web3-adapter)
  Grafana-->>Loki: query logs (datasource: Loki)
  Loki-->>Grafana: log results
sequenceDiagram
  participant Operator as docker compose
  participant Loki
  participant Promtail
  participant Grafana

  Operator->>Loki: start (port 3100)
  Operator->>Promtail: start (mount /var/log)
  Operator->>Grafana: start (provision Loki datasource)
  Promtail->>Loki: ship host logs
  Grafana->>Loki: query logs via provisioned DS

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • sosweetham

Poem

I thump the logs with fluffy might,
Paws print lines in Loki’s night.
Grafana stars begin to glow,
Web3 whispers, “off we go!”
QR waits—now data’s here—
Ports shift, ships steer—hippity cheer! 🐇📈


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e335f94 and a191899.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (1)
  • platforms/pictique/src/routes/(auth)/auth/+page.svelte (2 hunks)

@sosweetham sosweetham marked this pull request as ready for review August 11, 2025 09:35
@sosweetham sosweetham requested a review from coodos as a code owner August 11, 2025 09:35
@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 10

🧹 Nitpick comments (8)
db/.gitignore (1)

1-1: Good ignore; consider keeping the directory tracked

Ignoring db/data is correct for runtime artifacts. If the app expects the directory to exist, consider committing an empty placeholder:

  • Add db/data/.gitkeep and keep this ignore rule as-is.
  • Alternatively, add a comment explaining why data/* is ignored for future contributors.
infrastructure/web3-adapter/package.json (1)

40-42: Jest testMatch formatting updated, but repo runs Vitest — consolidate

Since scripts run Vitest, retaining Jest config is confusing. Either:

  • Switch scripts to run Jest; or
  • Remove the Jest config and devDeps.

This avoids divergent testing behavior between local dev and CI.
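If it helps, here is a minimal Vitest spec showing what the Vitest-only setup would run; the file path and assertion are hypothetical placeholders, not code from this PR:

// infrastructure/web3-adapter/src/__tests__/example.test.ts (hypothetical)
import { describe, expect, it } from "vitest";

describe("web3-adapter", () => {
    it("runs under Vitest without any Jest config", () => {
        // placeholder assertion; real tests would exercise the adapter's mapping logic
        expect(1 + 1).toBe(2);
    });
});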

platforms/pictique/src/routes/(auth)/auth/+page.svelte (1)

50-76: Good guard on QR rendering; add SSE cleanup to avoid leaks

The {#if qrData} prevents initializing the QR action with undefined input. Also add an onMount cleanup to close the EventSource when navigating away.

Example:

 onMount(async () => {
   const { data } = await apiClient.get('/api/auth/offer');
   qrData = data.uri;

-  function watchEventStream(id: string) {
-    const sseUrl = new URL(`/api/auth/sessions/${id}`, PUBLIC_PICTIQUE_BASE_URL).toString();
-    const eventSource = new EventSource(sseUrl);
+  function watchEventStream(id: string) {
+    const sseUrl = new URL(`/api/auth/sessions/${id}`, PUBLIC_PICTIQUE_BASE_URL).toString();
+    const eventSource = new EventSource(sseUrl);
     eventSource.onopen = () => { console.log('Successfully connected.') };
+    eventSource.onerror = (e) => {
+      console.error('SSE error', e);
+      // optionally eventSource.close();
+    };
     eventSource.onmessage = (e) => {
       const data = JSON.parse(e.data as string);
       const { user } = data;
       setAuthId(user.id);
       const { token } = data;
       setAuthToken(token);
       goto('/home');
     };
+    return eventSource;
   }

-  watchEventStream(new URL(qrData).searchParams.get('session') as string);
+  const es = watchEventStream(new URL(qrData).searchParams.get('session') as string);
+  return () => es?.close();
 });
control-panel.compose.yml (2)

15-19: Promtail config source is ambiguous; consider mounting an explicit config

Command points to /etc/promtail/config.yml but nothing mounts it; relying on image defaults can be brittle.

  • Commit a promtail-config.yml and mount it:
     promtail:
         image: grafana/promtail:latest
-        volumes:
-            - /var/log:/var/log
-        command: -config.file=/etc/promtail/config.yml
+        volumes:
+            - ./observability/promtail-config.yml:/etc/promtail/config.yml:ro
+            - /var/log:/var/log:ro
+            # If scraping Docker logs:
+            # - /var/lib/docker/containers:/var/lib/docker/containers:ro
+            # - /var/run/docker.sock:/var/run/docker.sock:ro
+        command: -config.file=/etc/promtail/config.yml

Would you like a starter promtail-config.yml for system and Docker logs?


37-37: Trim trailing space flagged by YAMLlint

Line 37 has trailing whitespace after proxy.

-                  access: proxy 
+                  access: proxy
infrastructure/web3-adapter/src/index.ts (2)

311-317: Use pino’s message parameter and consider debug level

Current shape logs a message field inside the object. Prefer a structured object plus a message string for better compatibility with Pino/Loki dashboards, and consider lowering to debug if this fires frequently.

-        logger.info({
-            message: "Handling change",
-            dataId: data.id,
-            tableName,
-            existingGlobalId,
-            participants,
-        });
+        logger.debug(
+            {
+                dataId: data.id,
+                tableName,
+                existingGlobalId,
+                participants,
+            },
+            "Handling change",
+        );

Also ensure none of these fields are sensitive for logging to a central store. If there’s any risk, we can redact them in the logger config.


308-310: Remove stray debug log or route through logger

console.log("We get here?") looks like a leftover. Either remove it or switch to logger.debug for consistency.

-        console.log("We get here?");
+        // logger.debug("Reached handleChange pre-check");
infrastructure/web3-adapter/src/logging/index.ts (1)

1-2: Consider not exporting transport from the public API

Re-exporting the transport exposes internals and invites tight coupling. Prefer exporting only the logger unless there’s a concrete use case.

-export * from "./transport";
 export * from "./logger";

If you keep exporting transport, mark it as internal in docs and avoid runtime mutation.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f1884d1 and e335f94.

⛔ Files ignored due to path filters (2)
  • infrastructure/eid-wallet/src-tauri/gen/android/.idea/AndroidProjectSystem.xml is excluded by !**/gen/**
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (12)
  • control-panel.compose.yml (1 hunks)
  • db/.gitignore (1 hunks)
  • evault.docker-compose.yml (1 hunks)
  • infrastructure/web3-adapter/package.json (2 hunks)
  • infrastructure/web3-adapter/src/index.ts (2 hunks)
  • infrastructure/web3-adapter/src/logging/index.ts (1 hunks)
  • infrastructure/web3-adapter/src/logging/logger.ts (1 hunks)
  • infrastructure/web3-adapter/src/logging/transport.ts (1 hunks)
  • platforms/pictique-api/src/database/data-source.ts (1 hunks)
  • platforms/pictique-api/src/index.ts (1 hunks)
  • platforms/pictique/src/routes/(auth)/auth/+page.svelte (2 hunks)
  • platforms/registry/src/index.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
infrastructure/web3-adapter/src/logging/logger.ts (1)
infrastructure/web3-adapter/src/logging/transport.ts (1)
  • transport (4-16)
🪛 YAMLlint (1.37.1)
control-panel.compose.yml

[error] 37-37: trailing spaces

(trailing-spaces)

🔇 Additional comments (2)
evault.docker-compose.yml (1)

19-20: No functional change — OK to merge

Whitespace-only adjustment; service/network definitions remain intact.

infrastructure/web3-adapter/src/index.ts (1)

7-7: LGTM: central logger import

Importing the shared logger via the logging aggregator is correct.

Comment on lines +5 to +12
    loki:
        image: grafana/loki:latest
        ports:
            - "3100:3100"
        command: -config.file=/etc/loki/local-config.yaml
        networks:
            - loki


🛠️ Refactor suggestion

Add persistence and restart policies

Currently all data (Loki indexes/chunks, Grafana dashboards) are ephemeral and services won't restart on failure.

  • Add restart policies and persistent volumes:
     loki:
         image: grafana/loki:latest
+        restart: unless-stopped
+        volumes:
+            - loki-data:/loki
...
     grafana:
+        restart: unless-stopped
+        volumes:
+            - grafana-data:/var/lib/grafana
  • Declare volumes at the end of the file:
 networks:
     loki:
 
 services:
   ...
+volumes:
+  loki-data:
+  grafana-data:

Optionally add healthchecks once validated tools exist in the images.

Also applies to: 21-50

🤖 Prompt for AI Agents
In control-panel.compose.yml around lines 5–12, the loki service is ephemeral
and has no restart policy or persistent storage; update the service to include a
restart policy (e.g., restart: unless-stopped or restart: always) and attach a
named volume for Loki data (mount a volume such as loki_data to the container
path used for chunks/indexes), then declare that named volume at the bottom of
the compose file under volumes:. Do the same for Grafana and any other services
referenced in lines 21–50 (add restart policy and a named volume like
grafana_data mounted to /var/lib/grafana), and optionally add healthcheck blocks
after verifying the images support the checks.


services:
    loki:
        image: grafana/loki:latest

🛠️ Refactor suggestion

Pin images to specific versions or digests instead of latest

Using :latest is non-deterministic and can break deployments unexpectedly.

  • Pin to known-good versions or digests, preferably via a .env file:
-        image: grafana/loki:latest
+        image: grafana/loki:${LOKI_TAG:-2.9.0}

-        image: grafana/promtail:latest
+        image: grafana/promtail:${PROMTAIL_TAG:-2.9.0}

-        image: grafana/grafana:latest
+        image: grafana/grafana:${GRAFANA_TAG:-10.4.0}

Please adjust tags to your validated versions.

Also applies to: 14-14, 46-46

🤖 Prompt for AI Agents
In control-panel.compose.yml around lines 6, 14 and 46, the service image
references use the :latest tag which is non-deterministic; update each image
line to a specific validated tag or immutable digest (e.g., image:
grafana/loki:<version> or image: grafana/loki@sha256:<digest>) and wire those
versions into a .env file (or compose variables) so deployments are
reproducible; replace :latest occurrences at the noted lines with the chosen
tags/digests and update documentation/.env with the validated versions.

    loki:
        image: grafana/loki:latest
        ports:
            - "3100:3100"

🛠️ Refactor suggestion

Limit public exposure of Loki/Grafana for local stacks

Bind to localhost to avoid exposing UIs and APIs on all interfaces.

-            - "3100:3100"
+            - "127.0.0.1:3100:3100"
...
-            - "3000:3000"
+            - "127.0.0.1:3000:3000"

Also applies to: 47-48

🤖 Prompt for AI Agents
In control-panel.compose.yml around line 8 (and also lines 47-48), the service
port mappings currently expose Loki/Grafana on all network interfaces
("3100:3100" etc.); change each public mapping to bind to localhost by prefixing
the host IP (for example 127.0.0.1:3100:3100 and similarly for the other ports
on lines 47-48) so the UIs/APIs are only accessible from the local machine.

Comment on lines +24 to +26
            - GF_AUTH_ANONYMOUS_ENABLED=true
            - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
            - GF_FEATURE_TOGGLES_ENABLE=alertingSimplifiedRouting,alertingQueryAndExpressionsStepMode

⚠️ Potential issue

Anonymous admin in Grafana is unsafe by default

GF_AUTH_ANONYMOUS_ENABLED=true with GF_AUTH_ANONYMOUS_ORG_ROLE=Admin grants full admin to anyone who can hit port 3000. For anything beyond strictly local, disable anonymous auth or reduce role to Viewer and set admin creds.

Suggested hardening:

  • Set GF_AUTH_ANONYMOUS_ENABLED=false (or GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer).
  • Provide GF_SECURITY_ADMIN_USER / GF_SECURITY_ADMIN_PASSWORD via env (not committed), or mount a secure provisioning.
  • Bind the port to 127.0.0.1 for local-only access.
-            - GF_AUTH_ANONYMOUS_ENABLED=true
-            - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
+            - GF_AUTH_ANONYMOUS_ENABLED=false
+            # For local dev with anonymous viewer:
+            # - GF_AUTH_ANONYMOUS_ENABLED=true
+            # - GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer
📝 Committable suggestion


Suggested change
-            - GF_AUTH_ANONYMOUS_ENABLED=true
-            - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
+            - GF_AUTH_ANONYMOUS_ENABLED=false
+            # For local dev with anonymous viewer:
+            # - GF_AUTH_ANONYMOUS_ENABLED=true
+            # - GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer
             - GF_FEATURE_TOGGLES_ENABLE=alertingSimplifiedRouting,alertingQueryAndExpressionsStepMode
🤖 Prompt for AI Agents
In control-panel.compose.yml around lines 24 to 26, anonymous Grafana access is
enabled with GF_AUTH_ANONYMOUS_ENABLED=true and GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
which grants admin rights to anyone hitting port 3000; change
GF_AUTH_ANONYMOUS_ENABLED=false or at minimum set
GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer, add GF_SECURITY_ADMIN_USER and
GF_SECURITY_ADMIN_PASSWORD via environment variables provided at deploy-time
(use Docker secrets, an env file not committed to VCS, or a mounted provisioning
file) and ensure the Grafana service binds only to localhost (127.0.0.1) for
non-public deployments to harden access.

Comment on lines +21 to +22
"pino": "^9.8.0",
"pino-loki": "^2.6.0",

💡 Verification agent

🧩 Analysis chain

Logging deps look good; clean up test config drift and stray dependency

pino/pino-loki additions are appropriate. However:

  • The project uses Vitest in scripts, but still includes Jest config and devDependencies.
  • There is a dependency named "test": "^3.3.0" which is likely accidental.

Please consolidate on a single test runner and remove unused packages/config.

Suggested clean-up (if standardizing on Vitest):

   "dependencies": {
@@
-    "test": "^3.3.0",
+    // removed: "test" dependency appears unintended
   },
   "devDependencies": {
-    "@types/jest": "^29.5.0",
-    "jest": "^29.5.0",
-    "ts-jest": "^29.1.0",
+    // removed Jest-related packages; using Vitest
   },
-  "jest": {
-    "preset": "ts-jest",
-    "testEnvironment": "node",
-    "testMatch": [
-      "**/__tests__/**/*.test.ts"
-    ]
-  }
+  // removed Jest config; using Vitest

Optional verification:


🏁 Script executed:

#!/bin/bash
# Confirm Jest is unused and find stray 'test' dependency usage
rg -n --no-heading 'from .*jest|describe\\(|it\\(|test\\(|@jest' -g '!node_modules'
jq -r '.dependencies.test // empty' infrastructure/web3-adapter/package.json

Length of output: 302


I’ll correct the regex and inspect both code usage and package.json scripts/deps:


🏁 Script executed:

#!/bin/bash
set -e
echo "Searching code for any Jest usage patterns..."
rg -n --no-heading \
   -e 'from .*jest' \
   -e 'describe\(' \
   -e 'it\(' \
   -e 'test\(' \
   -e '@jest' \
   --glob '*.{ts,tsx,js,jsx}'

echo
echo "Current scripts in package.json:"
jq .scripts infrastructure/web3-adapter/package.json

echo
echo "Relevant deps in package.json:"
jq '{ dependencies: .dependencies, devDependencies: .devDependencies }' infrastructure/web3-adapter/package.json | \
  jq '{ testDependency: .dependencies.test, vitest: .devDependencies.vitest, jest: .devDependencies.jest, tsJest: .devDependencies["ts-jest"], typesJest: .devDependencies["@types/jest"] }'

Length of output: 19836


Consolidate on Vitest: remove Jest, stray “test” dep, and add Vitest

All code is wired for Vitest (see "test": "vitest"), but the package still pulls in Jest and a stray "test": "^3.3.0" dependency. Please update infrastructure/web3-adapter/package.json as follows:

  • Remove from dependencies
    "test": "^3.3.0"
  • In devDependencies, drop:
    "jest": "^29.5.0"
    "ts-jest": "^29.1.0"
    "@types/jest": "^29.5.0"
  • Remove the top-level "jest": { … } config block, if present.
  • Add Vitest to devDependencies, e.g.:
    "vitest": "^0.34.1"
    (match your repo’s versioning conventions)
  • Ensure your CI and docs reference npm test → Vitest.

After these changes, run npm install (or pnpm/ yarn) to update your lockfile.

🤖 Prompt for AI Agents
In infrastructure/web3-adapter/package.json around lines 21-22, the package
still depends on stray test and Jest packages while the project is wired to
Vitest; remove the dependency "test": "^3.3.0" from dependencies, remove "jest",
"ts-jest" and "@types/jest" from devDependencies, delete any top-level "jest": {
... } config block if present, add "vitest": "^0.34.1" (or the repo's chosen
Vitest version) to devDependencies, update the "test" script to run Vitest
(e.g., "vitest" or "vitest run") and then run npm/yarn/pnpm install to refresh
the lockfile and ensure CI/docs reference npm test → Vitest.

import pino from "pino";
import { transport } from "./transport";

export const logger = pino(transport);

💡 Verification agent

❓ Verification inconclusive

Configure log level and redact sensitive fields

Initialize Pino with explicit level and redaction for safety; keep your existing Loki transport.

-export const logger = pino(transport);
+export const logger = pino(
+  {
+    level: process.env.LOG_LEVEL || "info",
+    // Redact commonly sensitive keys; extend as needed
+    redact: {
+      paths: ["password", "token", "authorization", "auth", "*.password", "*.token", "*.authorization"],
+      remove: true,
+    },
+  },
+  transport,
+);

Note: ensure non-dev deployments do not rely on default Loki basicAuth credentials; set LOKI_USERNAME/LOKI_PASSWORD securely.


Configure Pino logger with default level and redact sensitive fields

To ensure consistent log verbosity and prevent accidental leakage of secrets, initialize Pino with an explicit level and a redact configuration. Update the logger definition in:

  • infrastructure/web3-adapter/src/logging/logger.ts (line 4)

Apply this diff:

-export const logger = pino(transport);
+export const logger = pino(
+  {
+    level: process.env.LOG_LEVEL || "info",
+    // Redact commonly sensitive keys; extend as needed
+    redact: {
+      paths: [
+        "password",
+        "token",
+        "authorization",
+        "auth",
+        "*.password",
+        "*.token",
+        "*.authorization"
+      ],
+      remove: true,
+    },
+  },
+  transport,
+);

Also verify that non-dev deployments supply LOKI_USERNAME/LOKI_PASSWORD via secure env vars rather than relying on any defaults.

🤖 Prompt for AI Agents
In infrastructure/web3-adapter/src/logging/logger.ts around line 4, the Pino
instance is created without an explicit log level or redact rules; initialize
pino with an options object that sets level from process.env.LOG_LEVEL (fallback
to "info") and a redact array of common sensitive keys (e.g.
"req.headers.authorization", "req.headers.cookie", "res.headers", "password",
"pass", "token", "secret", "apiKey", "loki.*") while keeping the existing
transport; also add a runtime check that when NODE_ENV !== "development" the
process ensures LOKI_USERNAME and LOKI_PASSWORD are provided via secure env vars
(throw or exit with a clear error if missing) rather than relying on defaults.

Comment on lines +4 to +16
export const transport = pinoTransport<LokiOptions>({
    target: "pino-loki",
    options: {
        host: process.env.LOKI_URL || "http://localhost:3100",
        labels: {
            app: "web3-adapter",
        },
        basicAuth: {
            username: process.env.LOKI_USERNAME || "admin",
            password: process.env.LOKI_PASSWORD || "admin",
        },
    },
});

💡 Verification agent

🧩 Analysis chain

Fix TypeScript usage of pino.transport and harden auth defaults

  • pino.transport is not generic; pinoTransport<LokiOptions>(...) won’t type-check.
  • Avoid admin/admin defaults outside development.

Apply:

-import type { LokiOptions } from "pino-loki";
+import type { LokiOptions } from "pino-loki";

-export const transport = pinoTransport<LokiOptions>({
-    target: "pino-loki",
-    options: {
-        host: process.env.LOKI_URL || "http://localhost:3100",
-        labels: { app: "web3-adapter" },
-        basicAuth: {
-            username: process.env.LOKI_USERNAME || "admin",
-            password: process.env.LOKI_PASSWORD || "admin",
-        },
-    },
-});
+const lokiOptions: LokiOptions = {
+  host: process.env.LOKI_URL || "http://localhost:3100",
+  labels: { app: "web3-adapter" },
+  basicAuth: {
+    username: process.env.LOKI_USERNAME || "admin",
+    password: process.env.LOKI_PASSWORD || "admin",
+  },
+};
+export const transport = pinoTransport({
+  target: "pino-loki",
+  options: lokiOptions,
+});

Optional: dev-only default credentials

const isDev = process.env.NODE_ENV !== "production";
const lokiUser = process.env.LOKI_USERNAME ?? (isDev ? "admin" : undefined);
const lokiPass = process.env.LOKI_PASSWORD ?? (isDev ? "admin" : undefined);
// throw if missing in prod
if (!isDev && (!lokiUser || !lokiPass)) {
  throw new Error("LOKI_USERNAME/LOKI_PASSWORD are required in production");
}

Please verify that control-panel.compose.yml provides LOKI_URL/creds to this service.


🏁 Script executed:

#!/bin/bash
# Check for LOKI_* env wiring and where logger is consumed
rg -n --no-heading 'LOKI_URL|LOKI_USERNAME|LOKI_PASSWORD|logger\.info|pino-loki'

Length of output: 1709


Fix pinoTransport usage and lock down credentials

  • pinoTransport isn’t a generic function—move the LokiOptions type onto a separate lokiOptions object.
  • Don’t fall back to admin/admin in production—throw if credentials are missing.
  • Verify that LOKI_URL, LOKI_USERNAME, and LOKI_PASSWORD are supplied by your deployment (I didn’t see a Compose/K8s YAML in the repo).

Apply in infrastructure/web3-adapter/src/logging/transport.ts:

-import type { LokiOptions } from "pino-loki";
-
-export const transport = pinoTransport<LokiOptions>({
-  target: "pino-loki",
-  options: {
-    host: process.env.LOKI_URL || "http://localhost:3100",
-    labels: { app: "web3-adapter" },
-    basicAuth: {
-      username: process.env.LOKI_USERNAME || "admin",
-      password: process.env.LOKI_PASSWORD || "admin",
-    },
-  },
-});
+import type { LokiOptions } from "pino-loki";
+
+// build and validate our Loki options
+const lokiOptions: LokiOptions = {
+  host: process.env.LOKI_URL || "http://localhost:3100",
+  labels: { app: "web3-adapter" },
+  basicAuth: {
+    // in prod these must be set, only default in dev
+    username: process.env.LOKI_USERNAME ?? (process.env.NODE_ENV !== "production" ? "admin" : undefined),
+    password: process.env.LOKI_PASSWORD ?? (process.env.NODE_ENV !== "production" ? "admin" : undefined),
+  },
+};
+
+// enforce credentials in production
+if (process.env.NODE_ENV === "production" && (!lokiOptions.basicAuth.username || !lokiOptions.basicAuth.password)) {
+  throw new Error("LOKI_USERNAME and LOKI_PASSWORD are required in production");
+}
+
+export const transport = pinoTransport({
+  target: "pino-loki",
+  options: lokiOptions,
+});
📝 Committable suggestion


Suggested change
 import type { LokiOptions } from "pino-loki";
 // (preserve your existing `import { pinoTransport } from "pino";` above)

-export const transport = pinoTransport<LokiOptions>({
-    target: "pino-loki",
-    options: {
-        host: process.env.LOKI_URL || "http://localhost:3100",
-        labels: {
-            app: "web3-adapter",
-        },
-        basicAuth: {
-            username: process.env.LOKI_USERNAME || "admin",
-            password: process.env.LOKI_PASSWORD || "admin",
-        },
-    },
-});
+// build and validate our Loki options
+const lokiOptions: LokiOptions = {
+    host: process.env.LOKI_URL || "http://localhost:3100",
+    labels: { app: "web3-adapter" },
+    basicAuth: {
+        // in prod these must be set, only default in dev
+        username: process.env.LOKI_USERNAME ?? (process.env.NODE_ENV !== "production" ? "admin" : undefined),
+        password: process.env.LOKI_PASSWORD ?? (process.env.NODE_ENV !== "production" ? "admin" : undefined),
+    },
+};
+
+// enforce credentials in production
+if (
+    process.env.NODE_ENV === "production" &&
+    (!lokiOptions.basicAuth.username || !lokiOptions.basicAuth.password)
+) {
+    throw new Error("LOKI_USERNAME and LOKI_PASSWORD are required in production");
+}
+
+export const transport = pinoTransport({
+    target: "pino-loki",
+    options: lokiOptions,
+});
🤖 Prompt for AI Agents
In infrastructure/web3-adapter/src/logging/transport.ts around lines 4 to 16,
pinoTransport is incorrectly used as a generic and credentials are insecurely
defaulted; replace the generic usage by creating a separate const lokiOptions:
LokiOptions = { host: process.env.LOKI_URL || "http://localhost:3100", labels: {
app: "web3-adapter" }, basicAuth: { username: process.env.LOKI_USERNAME,
password: process.env.LOKI_PASSWORD } } and pass lokiOptions to pinoTransport
(no generic), and add runtime checks that throw an error if LOKI_USERNAME or
LOKI_PASSWORD are missing in non-development environments (do not fall back to
"admin"/"admin"); ensure deployment provides
LOKI_URL/LOKI_USERNAME/LOKI_PASSWORD.


config({ path: path.resolve(__dirname, "../../../../.env") });

console.log("SAODIHUOPIFHDSA", process.env.PICTIQUE_DATABASE_URL)

⚠️ Potential issue

Remove sensitive database URL logging

Printing the full PICTIQUE_DATABASE_URL can leak credentials to logs (stdout, CI, Loki/Grafana).

Please remove this line:

-console.log("SAODIHUOPIFHDSA", process.env.PICTIQUE_DATABASE_URL)

If you need a sanity check, log a non-sensitive indicator instead (e.g., presence or a masked DSN host).

I can add a small utility to safely mask DSNs if you prefer.
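A minimal sketch of such a helper in TypeScript; maskDsn is a hypothetical name and not part of this PR:

// Hypothetical helper: keeps only the non-sensitive parts of a DSN for logging
function maskDsn(dsn: string | undefined): string {
    if (!dsn) return "<unset>";
    try {
        const url = new URL(dsn);
        // keep scheme, host, port and database path; drop username/password
        return `${url.protocol}//${url.host}${url.pathname}`;
    } catch {
        return "<unparseable DSN>";
    }
}

// e.g. console.log("PICTIQUE_DATABASE_URL:", maskDsn(process.env.PICTIQUE_DATABASE_URL));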

📝 Committable suggestion


Suggested change
-console.log("SAODIHUOPIFHDSA", process.env.PICTIQUE_DATABASE_URL)
🤖 Prompt for AI Agents
In platforms/pictique-api/src/database/data-source.ts around line 15, remove the
console.log that prints process.env.PICTIQUE_DATABASE_URL (it exposes sensitive
credentials); delete that line and, if you need a sanity check, replace it with
a non-sensitive indicator such as logging whether the variable is present (e.g.,
!!process.env.PICTIQUE_DATABASE_URL) or the masked DSN host using a small
sanitizer utility that parses the URL and returns only safe bits (host or a
masked string). Ensure no full DSN or credentials are ever logged.


const app = express();
-const port = process.env.PORT || 3000;
+const port = process.env.PORT || 1111;

💡 Verification agent

🧩 Analysis chain

Coerce PORT to a number and verify infra/docs for new default 1111

Passing a string PORT to app.listen can be error-prone. Also, changing the fallback port can affect local/dev setups if PORT isn’t set.

Apply:

-const port = process.env.PORT || 1111;
+const port = Number(process.env.PORT) || 1111;

Please verify:

  • Docker/compose/k8s map the correct port for pictique-api.
  • Any docs or scripts assuming 3000 are updated.
  • Upstream clients/proxies (if any) target the new default.

🏁 Script executed:

#!/bin/bash
# Find places that assume 3000 or reference pictique-api port
rg -n --no-heading -A2 -B2 $'pictique-api|:3000|PORT=3000'

Length of output: 3743


Coerce PORT to a number in pictique-api

We didn’t find any Docker/compose files, scripts or docs referencing port 3000 for pictique-api, so bumping the default to 1111 has no downstream impact. Please apply the numeric coercion:

-const port = process.env.PORT || 1111;
+const port = Number(process.env.PORT) || 1111;
📝 Committable suggestion


Suggested change
-const port = process.env.PORT || 1111;
+const port = Number(process.env.PORT) || 1111;
🤖 Prompt for AI Agents
In platforms/pictique-api/src/index.ts around line 19, the PORT value is
currently left as a string; coerce it to a number and ensure a safe fallback by
parsing the env var with a radix and defaulting to 1111 if parsing fails.
Replace the string assignment with a numeric parse (e.g.
parseInt(process.env.PORT ?? '1111', 10)) or equivalent and handle NaN by
falling back to 1111 so the app always gets a valid numeric port.

server.log.info("Database connection initialized");
} catch (error) {
-server.log.error("Error during database initialization:", error);
+server.log.error({message: "Error during database initialization", detail: error});

🛠️ Refactor suggestion

Preserve error stack/metadata in logs

With Fastify/Pino, pass the error directly or under err so serializers capture stack and type.

Apply one of:

-server.log.error({message: "Error during database initialization", detail: error});
+server.log.error(error as Error, "Error during database initialization");

or

-server.log.error({message: "Error during database initialization", detail: error});
+server.log.error({ err: error }, "Error during database initialization");

Consider aligning other catch blocks to the same pattern for consistency.

📝 Committable suggestion


Suggested change
-server.log.error({message: "Error during database initialization", detail: error});
+server.log.error(error as Error, "Error during database initialization");

Suggested change
-server.log.error({message: "Error during database initialization", detail: error});
+server.log.error({ err: error }, "Error during database initialization");
🤖 Prompt for AI Agents
In platforms/registry/src/index.ts around line 27, the catch block currently
logs the error as detail which prevents Pino/Fastify serializers from preserving
stack/type; change the log call to pass the Error under the `err` key or pass
the error as the first argument (e.g. server.log.error({err: error, message:
"Error during database initialization"}) or server.log.error(error, "Error
during database initialization")) so serializers capture stack/metadata, and
update other catch blocks to the same pattern for consistency.

@coodos coodos merged commit 33c6798 into main Aug 12, 2025
0 of 4 checks passed
@coodos coodos deleted the feat/evault-logging branch August 12, 2025 05:49